• Wednesday, October 2, 2024

    In "Do All Problems Have Technical Fixes?", Robin K. Hill examines the pervasive belief in the technology imperative, the conviction that every problem can be addressed with a technical solution. This notion, often called tech solutionism, holds that technological advances are inherently beneficial and should be applied to challenges of every kind. Hill critiques this mindset, its implications, and the unexamined assumptions that drive it.

    The discussion opens with the views of tech leaders who champion the transformative power of artificial intelligence (AI) in everyday life. These leaders promote their products as solving significant problems while also boosting users' confidence in technology. Hill notes that this reflects a broader attitude in Silicon Valley, where technology is treated as a panacea for societal issues and the costs and drawbacks of deploying it are routinely sidelined.

    Hill then probes the philosophical underpinnings of the tech imperative, questioning the normative value assigned to computing technologies. She argues that belief in the goodness of technology is not merely a pragmatic observation but a deeper, unexamined conviction that deserves scrutiny. That belief rests on premises which equate the existence of a problem with the necessity of a technological fix, feeding a cycle of continuous development without critical evaluation.

    Rather than taking assertions of technology's benefits at face value, Hill analyzes the motivations of those promoting AI and identifies a pro-attitude, an inherent judgment of value that drives the promotion of technology. This shifts the question to the psychological and cultural factors behind the unwavering confidence in technology as a solution.

    Ultimately, Hill concludes that the tech imperative rests less on logical reasoning than on a prevailing attitude that technology is inherently good, and that understanding the psychology behind that attitude is essential for engaging critically with technology's role in society. The article calls for a more reflective approach, urging stakeholders to consider the broader implications of relying on technical fixes for complex problems.

  • Tuesday, March 19, 2024

    Tech has an unhealthy relationship with tools: we let them define our goals. However, most tools succumb to “the way of ways” and eventually replace the goal entirely. The root cause is a lack of leadership willing to define goals beyond “follow the book.”

    Hi Impact
  • Tuesday, June 4, 2024

    Technology has become entangled with the structure of society, and we can no longer see the world around us without it. Our data-centric way of seeing the world isn't serving us well. Tools are controlled by their human users, whereas machines do only what their designers want. Technologists should build tools rather than machines. Seeing like a data structure is a choice, and there are different paths we can take.

    Hi Impact
  • Thursday, October 3, 2024

    The author expresses a deep-seated fatigue with the pervasive use of artificial intelligence (AI) across various domains, particularly in software testing and development. They acknowledge the sharp rise in AI applications and the marketing hype that labels new tools "game changers" without much evidence to back the claim. The author does not oppose AI outright and recognizes its potential benefits in certain areas, but takes a critical view of how it is currently applied and of the quality of the results it produces.

    In software testing, the author draws on 18 years of experience to argue that the fundamental challenges remain unchanged despite the arrival of AI tools. Simply adding more tools does not address the core issues of test automation, such as the need for well-structured tests and a solid grasp of programming principles. Many AI-powered solutions prioritize speed over quality and often fail to deliver better results than traditional methods; human expertise is still needed to evaluate and refine AI-generated output, so AI should complement rather than replace skilled professionals.

    As a member of conference program committees, the author has also noticed a troubling number of proposals that appear to be AI-generated and that lack originality and depth. Outsourcing proposal writing to AI squanders the opportunity for speakers to showcase their own insights and experience, so the author refuses to accept submissions that look AI-generated, holding that genuine effort and personal input are essential for meaningful contributions.

    On a broader human level, the author laments the effect of AI on creativity and emotional expression. They cherish art made by humans, such as music, literature, and film, for the emotional connection it evokes, and find AI-generated content uninspiring and devoid of that human touch. They also raise concerns about job displacement, heavy financial investment in AI without clear returns, and the environmental footprint of AI technologies.

    While acknowledging that AI can be genuinely useful in specific contexts such as healthcare, the author advocates a more discerning approach to AI's role in society and would like to see far less reliance on AI-generated content, emphasizing the value of human creativity and expertise in producing meaningful work.
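
    As a concrete aside on what "well-structured" means here, the sketch below shows the shape the argument points toward: small, clearly named tests, a shared helper instead of copy-paste, and assertions a human reviewer can actually check. It is only an illustration, assuming pytest with the pytest-playwright plugin; the application URL and selectors are hypothetical.

      # A minimal sketch of a well-structured end-to-end check (illustrative only).
      # Assumes pytest + pytest-playwright; the URL and selectors are hypothetical.
      from playwright.sync_api import Page, expect


      def login(page: Page, user: str, password: str) -> None:
          """Shared helper so individual tests stay short and readable."""
          page.goto("https://example.test/login")   # hypothetical application URL
          page.fill("#username", user)
          page.fill("#password", password)
          page.click("button[type=submit]")


      def test_valid_login_shows_dashboard(page: Page) -> None:
          # One clearly named behaviour per test; the page fixture comes from pytest-playwright.
          login(page, "demo-user", "correct-horse")
          # One observable outcome, easy to verify and to debug on failure.
          expect(page.locator("h1")).to_have_text("Dashboard")


      def test_invalid_login_shows_error(page: Page) -> None:
          login(page, "demo-user", "wrong-password")
          expect(page.locator(".error")).to_contain_text("Invalid credentials")

    None of this is novel, which is close to the author's point: the hard part of test automation is the thinking and the structure, not the typing, so a tool that only generates more typing does little for quality.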

  • Wednesday, August 7, 2024

    Dedicating 20% of engineering capacity to technical debt doesn't work as well as it seems. This article highlights five common traps that undermine the approach, such as keeping separate backlogs for product and tech work, failing to communicate the value of technical work, and diluting focus on specific initiatives. To actually be prioritized, technical work should be integrated into the product roadmap and clearly demonstrate its business value.

  • Friday, April 5, 2024

    Observing accessibility barriers firsthand, and seeing their impact on others, shows how technology can bridge divides. We need to make digital accessibility a fundamental right and a prerequisite for technology to better humanity. Only when AI, the web, and technology are available to benefit all humankind will they become truly powerful.

  • Friday, April 12, 2024

    The notion that "AI" will negate the importance of accessibility is wrong. Addressing accessibility demands human-centric solutions tailored to real-world scenarios. Current technology already offers tools that foster accessibility, and adhering to established guidelines can effectively address user needs without significant alteration.

  • Friday, May 31, 2024

    This article challenges three common engineering leadership anti-patterns. First, it argues against always avoiding micromanagement, suggesting that leaders should engage in "conflict mining" to understand context and write down the details of company strategies. Second, it advocates for measuring imperfect but useful metrics over waiting for perfect ones. Lastly, it challenges the idea of managers as umbrellas, suggesting that exposing teams to the "gory details" and providing less buffered information is probably better in the long run.

  • Monday, March 4, 2024

    Software engineers should propose solutions rather than just raise problems, and prioritize team goals over personal ones. Collaboration trumps code perfection. Building relationships and trust makes it easier to win buy-in and support for one's ideas.

  • Tuesday, March 12, 2024

    AI advancements in healthcare raise concerns about overlooking patient perspectives and deepening inequalities. Automated decision-making systems often deny resources to the needy, demonstrating biases that could propagate into AI-driven medicine. This article advocates for participatory machine learning and patient-led research to prioritize patient expertise in the medical field.

    Hi Impact